hypernetwork model
Scale-Space Hypernetworks for Efficient Biomedical Imaging

Ortiz, Jose Javier Gonzalez, Guttag, John, Dalca, Adrian

arXiv.org Artificial Intelligence

Convolutional Neural Networks (CNNs) are the predominant model used for a variety of medical image analysis tasks. At inference time, these models are computationally intensive, especially with volumetric data. In principle, it is possible to trade accuracy for computational efficiency by manipulating the rescaling factor in the downsample and upsample layers of CNN architectures. However, properly exploring the accuracy-efficiency trade-off is prohibitively expensive with existing models. To address this, we introduce Scale-Space HyperNetworks (SSHN), a method that learns a spectrum of CNNs with varying internal rescaling factors. A single SSHN characterizes an entire Pareto accuracy-efficiency curve of models that match, and occasionally surpass, the outcomes of training many separate networks with fixed rescaling factors. We demonstrate the proposed approach in several medical image analysis applications, comparing SSHN against strategies with both fixed and dynamic rescaling factors. We find that SSHN consistently provides a better accuracy-efficiency trade-off at a fraction of the training cost. Trained SSHNs enable the user to quickly choose a rescaling factor that appropriately balances accuracy and computational efficiency for their particular needs at inference.
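The core idea in the abstract is a hypernetwork that maps an internal rescaling factor to the weights of a primary CNN, so that one trained model yields a whole family of networks. A minimal NumPy sketch of this weight-generation mechanism follows; all dimensions, layer sizes, and names (`hypernet`, `conv2d`, `phi`) are illustrative assumptions, not taken from the paper, and the toy hypernetwork here is untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical primary layer: one 3x3 conv with 1 input and 4 output channels,
# whose weights are *generated* from the rescaling factor rather than stored.
IN_CH, OUT_CH, K = 1, 4, 3
N_WEIGHTS = OUT_CH * IN_CH * K * K

# Hypernetwork: a tiny MLP from the scalar rescaling factor phi to the
# primary network's convolution weights (randomly initialized, untrained).
W1 = rng.standard_normal((16, 1)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal((N_WEIGHTS, 16)) * 0.1
b2 = np.zeros(N_WEIGHTS)

def hypernet(phi: float) -> np.ndarray:
    """Map a rescaling factor to a full set of conv weights."""
    h = np.tanh(W1 @ np.array([phi]) + b1)
    return (W2 @ h + b2).reshape(OUT_CH, IN_CH, K, K)

def conv2d(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Naive single-channel 'valid' convolution, for illustration only."""
    out_ch, _, k, _ = w.shape
    H, W = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.zeros((out_ch, H, W))
    for o in range(out_ch):
        for i in range(H):
            for j in range(W):
                out[o, i, j] = np.sum(w[o, 0] * x[i:i + k, j:j + k])
    return out

# At inference, each rescaling factor yields a different weight set from the
# same hypernetwork -- one set of hypernetwork parameters, many CNNs.
x = rng.standard_normal((8, 8))
for phi in (0.25, 0.5, 1.0):
    y = conv2d(x, hypernet(phi))
    print(phi, y.shape)
```

In the paper's setting the generated weights would parameterize the downsample/upsample path of a full segmentation CNN, and the hypernetwork would be trained jointly over sampled rescaling factors; this sketch only shows the conditioning mechanism, not the training loop.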


Modeling Situated Conversations for a Child-Care Robot Using Wearable Devices

On, Kyoung-Woon (Seoul National University) | Kim, Eun-Sol (Seoul National University) | Zhang, Byoung-Tak (Seoul National University)

AAAI Conferences

How can robots communicate fluently with humans and hold context-preserving conversations? This is a crucial problem in robotics research, especially for service robots such as child-care robots. Here, we aim to develop a situated conversation system for child-care robots. The system considers both the current conversational context between the robot and the child and the situation the child is in. It consists of two parts. The first part understands the context: it uses the robot's embedded sensors to track the conversational context and the child's wearable sensors to gather information about the child's situation. The second part generates the situated conversation. In terms of models, we designed a hierarchical Bayesian network for the first part and use a Hypernetwork model for the second. We illustrate the application with a communication scenario between a child and a child-care service robot. For this application, we collected wearable sensor data from the child and mother-child conversation data from daily life. Finally, we discuss our results and future work.